
    On Macroscopic Complexity and Perceptual Coding

    The theoretical limits of 'lossy' data compression algorithms are considered. The complexity of an object as seen by a macroscopic observer is the size of the perceptual code which discards all information that can be lost without altering the perception of the specified observer. The complexity of this macroscopically observed state is the size of the simplest description of any microstate comprising that macrostate. Inference and pattern recognition based on macrostate rather than microstate complexities will take advantage of the complexity of the macroscopic observer to ignore irrelevant noise.
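
    Stated compactly (the symbols K, M, and x are introduced here for illustration and are not taken from the abstract): if K(x) is the length of the shortest description of a microstate x, and M is the set of microstates that the specified observer perceives as one and the same macrostate, then the macrostate complexity is

        $$ K_{\mathrm{macro}}(M) \;=\; \min_{x \in M} K(x), $$

    so inference based on K_macro automatically discards any detail that does not change which macrostate the observer perceives.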

    Fast Autocorrelated Context Models for Data Compression

    A method is presented to automatically generate context models of data by calculating the data's autocorrelation function. The largest values of the autocorrelation function occur at the offsets or lags in the bitstream which tend to be the most highly correlated with any particular location. These offsets are ideal for use in predictive coding, such as prediction by partial matching (PPM) or context-mixing algorithms for data compression, making such algorithms more efficient and more general by reducing or eliminating the need for ad-hoc models based on particular types of data. Instead of using the definition of the autocorrelation function, which considers the pairwise correlations of data and requires O(n^2) time, the Wiener-Khinchin theorem is applied, quickly obtaining the autocorrelation as the inverse Fast Fourier transform of the data's power spectrum in O(n log n) time, making the technique practical for the compression of large data objects. The method is shown to produce the highest levels of performance obtained to date on a lossless image compression benchmark. Comment: v2 includes bibliography.
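
    A minimal sketch of this technique in Python with NumPy; the function name, parameters, and the lag-selection heuristic are illustrative assumptions rather than the paper's implementation:

        import numpy as np

        def autocorrelation_lags(data, num_lags=8):
            """Suggest context-model offsets from the data's autocorrelation function."""
            x = np.asarray(data, dtype=float)
            x = x - x.mean()                      # remove the mean so lag 0 does not trivially dominate
            n = x.size
            # Wiener-Khinchin: the autocorrelation is the inverse FFT of the power
            # spectrum, computed in O(n log n); zero-pad to 2n to avoid circular wrap-around.
            spectrum = np.fft.rfft(x, 2 * n)
            acf = np.fft.irfft(spectrum * np.conj(spectrum))[:n]
            if acf[0] != 0:
                acf = acf / acf[0]                # normalise so acf[0] == 1
            # The largest values (excluding lag 0) mark the offsets most correlated
            # with any given position: candidate contexts for PPM or context mixing.
            lags = np.argsort(acf[1:])[::-1][:num_lags] + 1
            return lags, acf

        # Example: a bitstream with period 7 should surface lag 7 and its multiples.
        stream = np.tile([1, 0, 0, 1, 1, 0, 1], 200)
        print(autocorrelation_lags(stream)[0])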

    Paradox of Peroxy Defects and Positive Holes in Rocks Part II: Outflow of Electric Currents from Stressed Rocks

    Understanding the electrical properties of rocks is of fundamental interest. We report on currents generated when stresses are applied. Loading the center of gabbro tiles, 30x30x0.9 cm^3, across a 5 cm diameter piston, leads to positive currents flowing from the center to the unstressed edges. Changing the constant rate of loading over 5 orders of magnitude from 0.2 kPa/s to 20 MPa/s produces positive currents, which start to flow already at low stress levels, <5 MPa. The currents increase as long as stresses increase. At constant load they flow for hours, days, even weeks and months, slowly decreasing with time. When stresses are removed, they rapidly disappear but can be made to reappear upon reloading. These currents are consistent with the stress-activation of peroxy defects, such as O_3Si-OO-SiO_3, in the matrix of rock-forming minerals. The peroxy break-up leads to positive holes h^•, i.e. electronic states associated with O^- in a matrix of O^{2-}, plus electrons, e'. Propagating along the upper edge of the valence band, the holes are able to flow from stressed to unstressed rock, traveling fast and far by way of a phonon-assisted electron hopping mechanism using energy levels at the upper edge of the valence band. Impacting the tile center leads to h^• pulses, 4-6 ms long, flowing outward at ~100 m/sec at a current equivalent to 1-2 x 10^9 A/km^3. Electrons, trapped in the broken peroxy bonds, are also mobile, but only within the stressed volume. Comment: 33 pages, 19 figures.
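
    Schematically, the stress-activation described above amounts to the following defect reaction (a sketch using the abstract's own notation):

        $$ \mathrm{O_3Si{-}OO{-}SiO_3} \;\xrightarrow{\ \text{stress}\ }\; e'\ (\text{electron, trapped in the broken peroxy bond}) \;+\; h^{\bullet}\ (\text{positive hole: O}^{-}\ \text{in an O}^{2-}\ \text{matrix, mobile}), $$

    with the mobile h^• carrying the outflowing positive current and the trapped e' remaining confined to the stressed volume.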

    Critical Data Compression

    A new approach to data compression is developed and applied to multimedia content. This method separates messages into components suitable for both lossless coding and 'lossy' or statistical coding techniques, compressing complex objects by separately encoding signals and noise. This is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximally likely noise function to the residual bits or compressing them using lossy methods. Upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. This results in compressed data similar to the original. For many test images, a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless coding produces less mean-squared error than an equal length of JPEG2000. Computer-generated images typically compress better using this method than through direct lossy coding, as do many black and white photographs and most color photographs at sufficiently high quality levels. Examples applying the method to audio and video coding are also demonstrated. Since two-part codes are efficient for both periodic and chaotic data, concatenations of roughly similar objects may be encoded efficiently, which leads to improved inference. Applications to artificial intelligence are demonstrated, showing that signals using an economical lossless code have a critical level of redundancy which leads to better description-based inference than signals which encode either insufficient data or too much detail. Comment: 99 pages, 31 figures.
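
    A toy sketch of the two-part split on 8-bit samples, written in Python; here zlib stands in for the paper's lossless coder (PAQ8l), a Gaussian stands in for the fitted noise function or lossy residual code, and the function names and the choice of k are illustrative assumptions:

        import numpy as np
        import zlib

        def two_part_encode(samples, k=4):
            """Code the k most significant bits exactly; model the remaining bits as noise."""
            samples = np.asarray(samples, dtype=np.uint8)
            high = samples >> (8 - k)                             # significant bits: redundant, compressible
            low = samples & ((1 << (8 - k)) - 1)                  # residual bits: treated as noise
            lossless = zlib.compress(high.tobytes())              # stand-in for a strong lossless coder
            noise_model = (float(low.mean()), float(low.std()))   # simple fitted noise function
            return lossless, noise_model, samples.shape, k

        def two_part_decode(lossless, noise_model, shape, k, seed=0):
            """Decode the significant bits and add synthesized noise for the residual."""
            high = np.frombuffer(zlib.decompress(lossless), dtype=np.uint8).reshape(shape)
            mean, std = noise_model
            rng = np.random.default_rng(seed)
            low = rng.normal(mean, std, shape).clip(0, (1 << (8 - k)) - 1).astype(np.uint8)
            return (high << (8 - k)) | low                        # similar, but not identical, to the original

    In this sketch the decoded samples reproduce the significant bits exactly, while the low bits are only statistically similar, which is the sense in which the result is "similar to the original" above.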

    Auto-Concealment of Supersymmetry in Extra Dimensions

    In supersymmetric (SUSY) theories with extra dimensions, the visible energy in sparticle decays can be significantly reduced and its energy distribution broadened, thus significantly weakening the present collider limits on SUSY. The mechanism applies when the lightest supersymmetric particle (LSP) is a bulk state (e.g. a bulk modulino, axino, or gravitino) and the size of the extra dimensions is larger than ~10^{-14} cm, and it does so for a broad variety of visible sparticle spectra. In such cases the lightest ordinary supersymmetric particle (LOSP), necessarily a brane-localised state, decays to the Kaluza-Klein (KK) discretuum of the LSP. This dynamically realises the compression mechanism for hiding SUSY, as decays into the more numerous heavier KK LSP states are favored. We find that LHC limits on right-handed slepton LOSPs evaporate, while LHC limits on stop LOSPs weaken to ~350-410 GeV, compared to ~700 GeV for a stop decaying to a massless LSP. Similarly, for the searches we consider, present limits on direct production of degenerate first- and second-generation squarks drop to ~450 GeV, compared to ~800 GeV for a squark decaying to a massless LSP. Auto-concealment typically works for a fundamental gravitational scale of M_* ~ 10-100 TeV, a scale sufficiently high that traditional searches for signatures of extra dimensions are mostly avoided. If superpartners are discovered, their prompt, displaced, or stopped decays can also provide new search opportunities for extra dimensions, with the potential to reach M_* ~ 10^9 GeV. This mechanism applies more generally than just SUSY theories, pertaining to any theory where there is a discrete quantum number shared by both brane and bulk sectors. Comment: 22 pages, 13 figures. Minor changes to match published version.

    Non-personal services to provide metering effort at NAB, Little Creek, VA

    Issued as Progress reports no. 1-4, and Final report, Project no. A-289

    Radiative Trapping and Hyperfine Structure: HCN

    The anomalous weakness of the F = 1 → 1 hyperfine component in the J = 1 → 0 emission of interstellar HCN can be caused by radiative trapping in the J = 2 → 1 lines. The anomaly is readily produced if the J = 1 levels are populated largely by collisional excitation from J = 0 to J = 2 followed by radiative decay to J = 1 with the J = 2 → 1 lines optically thick. Regions where the anomaly is found probably have H_2 densities less than 10^5 cm^(-3) and optical depths in the J = 1 → 0 lines greater than 50.
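
    The pumping path invoked above can be sketched schematically (a restatement of the abstract, not a derivation):

        $$ J{=}0 \;\xrightarrow{\ \text{collisional excitation}\ }\; J{=}2 \;\xrightarrow{\ \text{radiative decay},\ \tau(2 \to 1) \gg 1\ }\; J{=}1 \;\xrightarrow{\ 1 \to 0\ \text{emission}\ }\; J{=}0, $$

    with radiative trapping in the optically thick 2 → 1 lines producing the anomalously weak F = 1 → 1 hyperfine component of the 1 → 0 emission.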

    On the Origin of the 10 Micron Depressions in the Spectra of Compact Infrared Sources

    The 10 µm depression observed in the spectrum of a compact infrared object is usually ascribed to absorption by intervening cold silicate grains, and the underlying source spectrum is taken to be either a blackbody or a blackbody with superposed excess 10 µm emission. We question this assumption about the underlying source spectrum for optically thick compact sources. We find, upon modeling both the objects BN and W3 IRS5, that the source actually emits less at the 10 µm resonance than outside the resonance, so that a depression at 10 µm already exists in the source spectrum. This difference in emission arises because, due to the higher opacity in the resonance, the observed 10 µm radiation is produced further out in the source than is the radiation just outside the resonance, and the lower dust temperature further out gives rise to weaker emission at 10 µm than in the continuum. An observed 10 µm depression can therefore be largely due to this effect, and little or no intervening extinction is required. This explanation of the 10 µm depression leads to a correlation such that the magnitude of the depression will increase with decreasing color temperature of the source. It also predicts no depression at 20 µm for sources with color temperatures greater than 200 K. Observations at 20 µm would then be able to decide on the validity of this explanation.
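
    The argument can be made semi-quantitative with a standard photosphere estimate; the symbols below are introduced for illustration and are not taken from the abstract. The emergent intensity at frequency ν is set roughly by the Planck function at the radius where the optical depth to the observer reaches unity,

        $$ I_\nu \;\approx\; B_\nu\!\left(T(r_\nu)\right), \qquad \tau_\nu(r_\nu) \simeq 1 . $$

    Because the dust opacity is larger inside the 10 µm silicate resonance, the corresponding r_ν lies farther out, where the dust is cooler, than the neighbouring continuum photosphere; the lower temperature gives a smaller B_ν and hence a depression at 10 µm without any intervening cold absorbing screen.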